Results 1 - 20 of 43
1.
Proceedings of SPIE - The International Society for Optical Engineering ; 12415, 2023.
Article in English | Scopus | ID: covidwho-20244908

ABSTRACT

The Rigorous Coupled Wave Analysis (RCWA) method is highly efficient for simulating diffraction efficiency and field distribution patterns in periodic structures and textured optoelectronic devices. GPUs have been increasingly applied to complex scientific problems such as climate simulation and the latest COVID-19 spread models. In this paper, we break the RCWA simulation problem down into its key computational steps (eigensystem solution, matrix inversion/multiplication) and investigate the speed-up provided by optimized linear-algebra GPU libraries in comparison to the multithreaded Intel MKL CPU library running on the IRIDIS 5 supercomputer (1 NVIDIA V100 GPU and 40 Intel Xeon Gold 6138 CPU cores). Our work shows that the GPU outperforms the CPU significantly for all required steps. The eigensystem solution becomes 60% faster; matrix inversion improves with size, reaching an 8x speed-up for large matrices. Most significantly, matrix multiplication becomes 40x faster for small and 5x faster for large matrix sizes. © 2023 SPIE.
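The comparison above lends itself to a small reproducibility sketch. Below is a hedged micro-benchmark of the three building blocks named in the abstract (eigensystem solution, inversion, multiplication), assuming NumPy on the CPU (MKL-backed) and CuPy on the GPU; it is not the authors' code, and the matrix size is illustrative rather than taken from the paper.

```python
import time
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and CuPy installed

def bench(fn, *args, repeats=3):
    """Best wall-clock time over a few repeats (GPU work is synchronized)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        cp.cuda.Stream.null.synchronize()  # no-op for CPU calls, flushes GPU kernels
        best = min(best, time.perf_counter() - t0)
    return best

n = 2048                      # illustrative size; RCWA truncation orders set this in practice
a = np.random.rand(n, n)
a = a + a.T                   # symmetric, so eigh applies on both backends
a_gpu = cp.asarray(a)

print("eigh  ", bench(np.linalg.eigh, a), bench(cp.linalg.eigh, a_gpu))
print("inv   ", bench(np.linalg.inv, a), bench(cp.linalg.inv, a_gpu))
print("matmul", bench(np.matmul, a, a), bench(cp.matmul, a_gpu, a_gpu))
```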

2.
2023 25th International Conference on Digital Signal Processing and its Applications, DSPA 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20237784

ABSTRACT

The study is devoted to a comparative analysis and retrospective evaluation of laboratory and instrumental data against the severity of lung-tissue damage in patients with COVID-19. The methodology for interpreting and analyzing dynamic changes associated with COVID-19 on lung CT images was improved. The technique includes the following steps: pre-processing, segmentation with color coding, and calculation and evaluation of signs to highlight areas with probable pathology (including combined evaluation of signs). Analysis and interpretation are carried out on an evolving patient database. The following indicators are distinguished: the results of analyzing lung CT images over time, and the results of analyzing clinical and laboratory data (severity of the disease course, temperature, saturation, etc.). The laboratory results are analyzed with an emphasis on the main indicator, interleukin-6, a marker of significant and serious changes characterizing the severity of the patient's condition. © 2023 IEEE.

3.
Concurrency and Computation: Practice and Experience ; 2023.
Article in English | Scopus | ID: covidwho-2323991

ABSTRACT

In this article, the detection of COVID-19 patients based on an attention segmental recurrent neural network (ASRNN) with the Archimedes optimization algorithm (AOA) using ultra-low-dose CT (ULDCT) images is proposed. The ultra-low-dose CT images are gathered from a real-time dataset. The input images are preprocessed with a convolutional auto-encoder to restore ULDCT image quality by removing noise. The preprocessed images are given to generalized additive models with structured interactions (GAMI) for extracting radiomic features. The radiomic features, such as morphologic, gray-scale statistic, and Haralick texture features, are extracted using GAMI-Net. The ASRNN classifier, whose weight parameters are optimized with the Archimedes optimization algorithm, classifies COVID-19 ULDCT images as COVID-19 or normal. The proposed approach is implemented on the MATLAB platform. The proposed ASRNN-AOA-ULDCT attains accuracy 22.08%, 24.03%, 34.76%, 34.65%, 26.89%, 45.86%, and 32.14% better, and precision 23.34%, 26.45%, 34.98%, 27.06%, 35.87%, 34.44%, and 22.36% better, than the existing methods DenseNet-HHO-ULDCT, ELM-DNN-ULDCT, EDL-ULDCT, ResNet 50-ULDCT, SDL-ULDCT, CNN-ULDCT, and DRNN-ULDCT, respectively. © 2023 John Wiley & Sons, Ltd.

4.
1st International Conference on Futuristic Technologies, INCOFT 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2318456

ABSTRACT

Automated diagnosis of COVID-19 based on CT scan images of the lungs has attracted considerable attention from researchers in recent times. The rationale of this work is to exploit texture patterns via deep learning networks so as to reduce the intra-class similarities among the patterns of the COVID-19, Pneumonia, and healthy class samples. Understanding the concurrence of COVID-19 patterns with closely related patterns of other lung diseases is a new challenge. In this paper, a fine-tuned variational deep learning architecture named Deep CT-NET for COVID-19 diagnosis is proposed. Variational modelling of Deep CT-NET is evaluated using ResNet50, Xception, InceptionV3, and VGG19. Initially, grey-level texture features are exploited to understand the correlation characteristics between the grey-level patterns of the COVID-19, Pneumonia, and healthy class samples. A CT scan image dataset of 20,978 images was used for the experimental analysis to assess the performance of Deep CT-NET with all the mentioned models. The evaluation outcomes reveal that ResNet50, Xception, and InceptionV3 produce better performance, with testing accuracies of more than 96%, in comparison with VGG19. © 2022 IEEE.
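For orientation, the following is a minimal sketch of the kind of fine-tuned transfer-learning setup the abstract compares (ResNet50, Xception, InceptionV3, VGG19 backbones), assuming Keras; the classifier head, class count, and input size are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, applications

def build_ct_net(backbone_name="ResNet50", num_classes=3, input_shape=(224, 224, 3)):
    """Pretrained ImageNet backbone + small classification head (COVID / Pneumonia / Healthy)."""
    backbone_cls = getattr(applications, backbone_name)  # ResNet50, Xception, InceptionV3, VGG19
    backbone = backbone_cls(weights="imagenet", include_top=False, input_shape=input_shape)
    backbone.trainable = False  # freeze pretrained weights; fine-tune only the head first
    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_ct_net("Xception")
model.summary()
```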

5.
Lecture Notes on Data Engineering and Communications Technologies ; 158:227-235, 2023.
Article in English | Scopus | ID: covidwho-2299510

ABSTRACT

COVID-19, declared a pandemic by the World Health Organization, had infected more than 212,165,567 people, with a fatality figure of 4,436,957, as of 22 August 2021. The infection can develop into pneumonia, which causes breathing problems and can be detected using chest X-rays or CT scans. This work aims to produce an automated way of detecting COVID-19 infection from chest X-rays using a transfer-learning strategy: pre-trained models serve as feature extractors to obtain numerical features from each image, a secondary dataset is constructed from these features, and the resulting numerical vectors in tabular form are fed to simple machine learning classifiers that work well with tabular data, such as SVM, KNN, logistic regression, and Naive Bayes. This work also extracts features using texture-based techniques: GLCM is used to obtain second-order statistical features, and another secondary dataset is constructed from these texture-based features, which are again fed to the same classifiers. The deep-learning and texture-based feature extraction strategies are then compared and the results analyzed. Among the deep-learning strategies, MobileNet with SVM performs best with a test accuracy of 0.98, followed by logistic regression, KNN, and Naive Bayes. For the GLCM strategy, KNN performs best with a test accuracy of 0.96, followed by logistic regression, SVM, and Naive Bayes. Overall, the deep-learning strategies proved more effective, but in terms of computation time and number of features, the texture-based GLCM strategy proved effective. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
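A minimal sketch of the GLCM branch of the pipeline described above, second-order texture statistics per chest X-ray collected into a tabular dataset and fed to simple classifiers, assuming scikit-image (≥0.19) and scikit-learn; the data here are placeholders, not the paper's dataset.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def glcm_features(img_u8):
    """8-bit grayscale image -> vector of second-order GLCM statistics."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder images and labels (0 = normal, 1 = COVID-19) standing in for the real CXR set.
images = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
labels = np.array([0, 1] * 20)
X = np.array([glcm_features(im) for im in images])

for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
    print(type(clf).__name__, cross_val_score(clf, X, labels, cv=5).mean())
```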

6.
IEEE Transactions on Instrumentation and Measurement ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2296656

ABSTRACT

Accurate segmentation of COVID-19 infection from computed tomography (CT) scans is critical for the diagnosis and treatment of COVID-19. However, infection segmentation is a challenging task due to the varied textures, sizes, and locations of infections, low contrast, and blurred boundaries. To address these problems, we propose a novel Multi-scale Wavelet Guidance Network (MWG-Net) for COVID-19 lung infection segmentation that integrates multi-scale information from the wavelet domain into the encoder and decoder of a convolutional neural network (CNN). In particular, we propose the Wavelet Guidance Module (WGM) and the Wavelet & Edge Guidance Module (WEGM). The WGM guides the encoder to extract infection details through multi-scale spatial and frequency features in the wavelet domain, while the WEGM guides the decoder to recover infection details through multi-scale wavelet representations and multi-scale infection edge information. Besides, a Progressive Fusion Module (PFM) is further developed to aggregate and explore the multi-scale features of the encoder and decoder. Notably, we establish a COVID-19 segmentation dataset (named COVID-Seg-100) containing 5800+ annotated slices for performance evaluation. Furthermore, we conduct extensive experiments to compare our method with other state-of-the-art approaches on COVID-Seg-100 and two publicly available datasets, i.e., MosMedData and COVID-SemiSeg. The results show that MWG-Net outperforms state-of-the-art methods on different datasets and achieves more accurate and promising COVID-19 lung infection segmentation. IEEE

7.
1st IEEE International Interdisciplinary Humanitarian Conference for Sustainability, IIHC 2022 ; : 1196-1199, 2022.
Article in English | Scopus | ID: covidwho-2277670

ABSTRACT

The novel coronavirus (COVID-19) is a pandemic of unthinkable scope and magnitude that poses a significant threat to the medical business worldwide in the twenty-first century. To a great extent, it has fundamentally altered the texture of life. The growing number of people dying from the disease has instilled fear in many, who are hesitant to seek even basic medical help. In light of the recent COVID-19 scenario and the growing number of affected people, researchers began to focus on ways to communicate and monitor patient information remotely in order to reduce the risk of infection. The Internet of Things (IoT) is one of the booming technologies in the medical and industrial fields. Patients could benefit from the proposed device because it can monitor and diagnose their health status. This study describes a gadget that measures and records heart rate, body temperature, and CT imaging. These records are measured and sent to a cloud server using an Arduino device with sensors. © 2022 IEEE.

8.
International Journal of Imaging Systems and Technology ; 2023.
Article in English | Scopus | ID: covidwho-2275837

ABSTRACT

COVID-19 is a deadly and fast-spreading disease that can cause early death by affecting human organs, primarily the lungs. Detecting COVID-19 in the early stages is crucial, as it may help restrict the progression and spread of the disease. The traditional and trending tools are manual, time-inefficient, and less accurate; hence, an automated diagnosis is needed to detect COVID-19 in the early stages. Recently, several methods for exploiting computed tomography (CT) scan images to detect COVID-19 have been developed; however, none are effective at detecting COVID-19 in the preliminary phase. In this work, we propose a method based on two-dimensional variational mode decomposition. The proposed approach decomposes pre-processed CT scan images into sub-bands. A texture-based Gabor filter bank extracts the relevant features, and the Student's t-value is used to recognize robust traits. After that, linear discriminative analysis (LDA) reduces the dimensionality of the features and ranks the robust features; only the first 14 LDA features are used for classification. Finally, a least-squares support vector machine (LS-SVM) classifier with a radial basis function kernel distinguishes between COVID and non-COVID CT lung images. The trial results showed that our model outperformed cutting-edge methods for COVID classification. Using tenfold cross-validation, this model achieved an improved classification accuracy of 93.96%, a specificity of 95.59%, and an F1 score of 93%. To validate the proposed methodology, we conducted comparative experiments with deep learning and traditional machine learning-based models such as random forest, K-nearest neighbor, SVM, convolutional neural network, and recurrent neural network. The proposed model is ready to help radiologists identify the disease in daily practice. © 2023 Wiley Periodicals LLC.
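A sketch of the Gabor filter-bank feature extraction step described above, assuming scikit-image; the VMD decomposition, t-test selection, LDA ranking, and LS-SVM stages are omitted, and the frequency/orientation grid is an illustrative assumption.

```python
import numpy as np
from skimage.filters import gabor

def gabor_bank_features(img, frequencies=(0.1, 0.2, 0.3),
                        thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and variance of each Gabor response magnitude -> one feature vector per image."""
    feats = []
    for f in frequencies:
        for theta in thetas:
            real, imag = gabor(img, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.asarray(feats)

img = np.random.rand(128, 128)          # stands in for a pre-processed CT sub-band
print(gabor_bank_features(img).shape)   # (24,) = 3 frequencies x 4 orientations x 2 stats
```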

9.
International Journal of Food Science and Technology ; 2023.
Article in English | Scopus | ID: covidwho-2274012

ABSTRACT

Olfactory dysfunction (an impaired sense of smell) affects flavour perception and subsequent appetite, potentially leading to malnutrition and affective changes. It tends to develop during the early stages of SARS-CoV-2 infection and may progress into long-term olfactory loss. Therefore, specialised food designs are needed to encourage a healthy, yet pleasurable, eating experience for this population. This review aims to discuss food design strategies for satisfying the sensorial and nutritional needs that could be applicable to SARS-CoV-2 patients with mild olfactory dysfunction. Key literature on food design studies suitable for individuals suffering from olfactory and gustatory dysfunction was reviewed, including strategies for flavour enhancement, colour enhancement, texture enhancement (including through trigeminal stimulation), and fortification with macronutrients, micronutrients, and fibre. Potential gaps, and the application of these strategies to offer appealing and nutritious food designs to long-term SARS-CoV-2 patients to improve their quality of life, were explored. © 2023 The Authors. International Journal of Food Science & Technology published by John Wiley & Sons Ltd on behalf of Institute of Food, Science and Technology (IFSTTF).

10.
5th International Seminar on Research of Information Technology and Intelligent Systems, ISRITI 2022 ; : 514-519, 2022.
Article in English | Scopus | ID: covidwho-2265108

ABSTRACT

Dental caries has shown a higher frequency among dental diseases in Indonesia, even before the Covid-19 pandemic. The high risk of spreading the virus during the pandemic hinders the handling of dental care patients, and teledentistry is suggested as the main alternative to reduce this risk. This study aims to establish a system for classifying the level of dental caries based on texture that is applicable for clinical implementation. Dental caries images were processed using the Gabor filter method for feature extraction and classified using the Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN) methods. A downsampling technique was applied to reduce the large number of features affecting the classification time. System testing revealed that the Cubic SVM model generated the best results: accuracy of 90.5%, precision of 89.75%, recall of 89.25%, specificity of 91.75%, and F-score of 88.5%. © 2022 IEEE.

11.
ACS Applied Polymer Materials ; 2022.
Article in English | Scopus | ID: covidwho-2285232

ABSTRACT

The current global health crisis caused by the SARS-CoV-2 virus (COVID-19) has increased the use of personal protective equipment, especially face masks, leading to the disposal of a large amount of plastic waste and causing an environmental crisis due to the use of non-biodegradable and non-recyclable polymers such as polypropylene and polyester. In this work, an eco-friendly biopolymer, polylactic acid (PLA), was used to manufacture hierarchical nanoporous microfiber biofilters via a single-step rotary jet spinning (RJS) technique. The process parameters that aid the formation of nanoporosity within the microfibers are discussed. The microstructure of the fibers was analyzed by scanning electron microscopy (SEM), and a noninvasive X-ray microtomography (XRM) technique was employed to study the three-dimensional (3D) morphology and porous architecture. Particulate matter (PM) and aerosol filtration efficiency were tested to OSHA standards with a broad range (10-1000 nm) of aerosolized saline droplets. Viral penetration efficiency was tested using the ΦX174 bacteriophage (∼25 nm) with an envelope mimicking the spike protein structure of SARS-CoV-2. Although these fibers are similar in size to those used in N95 filters, the developed biofilters present superior filtration efficiency (∼99%) while retaining better breathability (<4% pressure drop) than N95 respirator filters. © 2023 American Chemical Society

12.
International Journal of Imaging Systems and Technology ; 2023.
Article in English | Scopus | ID: covidwho-2248212

ABSTRACT

The conventional approach for identifying ground glass opacities (GGO) in medical imaging is to use a convolutional neural network (CNN), a subset of artificial intelligence, which provides promising performance in COVID-19 detection. However, CNN is still limited in capturing structured relationships of GGO as the texture and shape of the GGO can be confused with other structures in the image. In this paper, a novel framework called DeepChestNet is proposed that leverages structured relationships by jointly performing segmentation and classification on the lung, pulmonary lobe, and GGO, leading to enhanced detection of COVID-19 with findings. The performance of DeepChestNet in terms of dice similarity coefficient is 99.35%, 99.73%, and 97.89% for the lung, pulmonary lobe, and GGO segmentation, respectively. The experimental investigations on DeepChestNet-Lung, DeepChestNet-Lobe and DeepChestNet-COVID datasets, and comparison with several state-of-the-art approaches reveal the great potential of DeepChestNet for diagnosis of COVID-19 disease. © 2023 Wiley Periodicals LLC.
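The Dice similarity coefficient quoted above for the lung, lobe, and GGO segmentations is a standard overlap metric; the following is a generic NumPy implementation for binary masks, not the DeepChestNet code.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2 * |A and B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Tiny worked example: 4 overlapping pixels, mask sizes 4 and 6 -> Dice = 8/10 = 0.8.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(dice(a, b))
```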

13.
3rd International Conference on Communication, Computing and Industry 4.0, C2I4 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2278917

ABSTRACT

Since the onset of the coronavirus pandemic, education in schools and colleges has been imparted through different online platforms such as Zoom, MS Teams, Skype, and more. Students have found ways to avoid attending lectures by manipulating camera angles, putting up recordings of their faces, fiddling with their phones and laptops, joining online classes late, or even napping during lectures. It is impossible for faculty members to keep an eye on fifty or more students and teach the class at the same time, and it is difficult to keep track of how much attention each student is paying during classes. Many students waste plenty of time sitting idle in front of the teachers. Students and their parents need to be made aware of what portion of the class the student is actually attentive for and what portion they are not. Teachers need to know and keep track of their students' activities during class and monitor their performance. Through this prototype we aim to tackle the aforementioned problems and barriers that teachers and professors face in properly counselling their students during the pandemic, or during online lectures as a whole. OpenCV face recognition, tracking, and eye detection will be used to read the facial textures of students during online classes. An immediate notification will be sent to the student's phone via email if they are not paying attention in class. IFTTT/SMTP will be used to send messages across devices. A mobile application on the teacher's phone will keep track of students' activities. © 2022 IEEE.
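A hedged sketch of the face and eye detection step mentioned above, using OpenCV's bundled Haar cascades; the attentiveness rule, camera source, and the IFTTT/SMTP notification logic are illustrative assumptions rather than the prototype's actual design.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def student_attentive(frame_bgr):
    """Very rough proxy: count a student as attentive if a face with two visible eyes is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) >= 2:
            return True
    return False

cap = cv2.VideoCapture(0)            # webcam feed during an online class
ok, frame = cap.read()
if ok:
    print("attentive" if student_attentive(frame) else "not attentive")
cap.release()
```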

14.
2nd International Conference on Signal and Information Processing, IConSIP 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2233270

ABSTRACT

As a result of the COVID-19 pandemic, medical examinations (RT-PCR, X-ray, CT scan, etc.) may be required to make a medical decision. The SARS-CoV-2 virus that causes COVID-19 infects and spreads in the lungs, which can be easily recognized on chest X-rays or CT scans. However, along with COVID-19 cases, cases of another respiratory ailment known as pneumonia began to rise, so clinicians had difficulty distinguishing between COVID-19 and pneumonia, and more tests were required to identify the condition. After a few days, the SARS-CoV-2 virus multiplies in the lungs, causing a combination of pneumonia and COVID-19 termed novel coronavirus-infected pneumonia. In this research, we employ Machine Learning and Deep Learning models to predict classes such as COVID-19 Positive, COVID-19 Negative, and Viral Pneumonia. A dataset of 120 images was used for the Machine Learning model, and accuracy was calculated by extracting eight statistical features from the image texture. AdaBoost, Decision Tree, and Naive Bayes have overall accuracies of 88.46%, 86.4%, and 80%, respectively. Comparing the algorithms, AdaBoost performs best, with an overall accuracy of 88.46%, sensitivity of 84.62%, specificity of 92.31%, F1-score of 88%, and Kappa of 0.8277. For the Deep Learning model, the VGG16 architecture is used in a CNN trained on 838 images; the model's total accuracy is 99.17%. © 2022 IEEE.
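A sketch of the "eight statistical texture features plus a simple classifier" pipeline described above, assuming SciPy and scikit-learn; the paper does not list its exact feature set, so these first-order statistics and the placeholder data are illustrative choices only.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def first_order_features(img):
    """Eight simple first-order statistics of the image texture."""
    x = img.ravel().astype(float)
    hist, _ = np.histogram(x, bins=256, density=True)
    return np.array([x.mean(), x.std(), stats.skew(x), stats.kurtosis(x),
                     x.min(), x.max(), np.median(x), stats.entropy(hist + 1e-12)])

# Placeholder dataset: 0 = COVID-19 positive, 1 = COVID-19 negative, 2 = viral pneumonia.
images = [np.random.rand(64, 64) for _ in range(60)]
labels = np.repeat([0, 1, 2], 20)
X = np.array([first_order_features(im) for im in images])

print(cross_val_score(AdaBoostClassifier(n_estimators=100), X, labels, cv=5).mean())
```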

15.
25th International Symposium on Measurement and Control in Robotics, ISMCR 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2191970

ABSTRACT

In recent years, the spread of infectious diseases such as COVID-19 has increased the need for medical examinations that avoid contact between doctors and patients. Most treatments, especially in dermatology, require palpation, and its impact is significant. In this study, we aimed to reproduce, using a simple robot device, the judgment of the softness and surface textures of diseased parts, which is important to dermatologists for determining a patient's condition. Five levels of softness and three types of surface textures, labeled with 14 types of materials, were obtained from interviews with dermatologists. To acquire a haptic response from materials during pushing, 1) a single-rod probe with a haptic sensor using a linear actuator and 2) a dual-rod configuration to capture vibration propagation were constructed. Frequency-analyzed images were produced from the obtained waveforms of force and acceleration. A total of 343 images from 13 materials were used for transfer learning and were classified using AlexNet. The classification accuracy of the single-rod probe was 93.0%, and that of the dual-probe configuration was 95.2%. The classification accuracy was higher with the dual-probe configuration than with the single one: the softness classification accuracy improved from 93.8% (single-rod) to 95.7% (dual-rod), and the surface texture classification accuracy improved from 91.9% (single-rod) to 92.8% (dual-rod). Therefore, the proposed method enables the reproduction of dermatologists' judgment of five levels of softness and three types of surface texture. © 2022 IEEE.

16.
5th International Conference on Information and Communications Technology, ICOIACT 2022 ; : 497-502, 2022.
Article in English | Scopus | ID: covidwho-2191900

ABSTRACT

COVID-19 remains a worldwide concern because it is still spreading rapidly and has greatly impacted human activities. Efforts to prevent its transmission by detecting it, so that further actions can be taken, continue to be carried out, and various research efforts have been made to detect COVID-19. Detection can be performed using image processing or machine learning. Detection in this study was carried out using X-ray images of COVID-19-positive people, totaling 101 images, augmented through pre-processing to 404 images. These images were compared with X-ray images of normal people (202 images) and X-ray images of pneumonia-positive people (390 images). Feature extraction was performed using the Haar wavelet transformation, and the data were classified using Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) methods. The Fine KNN model obtained the best accuracy, with an average of 94.66%. © 2022 IEEE.
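A sketch of the Haar wavelet feature extraction described above, assuming PyWavelets and scikit-learn; the statistics taken from each sub-band, the "Fine KNN" hyper-parameters, and the placeholder data are assumptions, not details from the paper.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def haar_features(img):
    """Single-level 2-D Haar decomposition -> mean and std of each sub-band."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    bands = (cA, cH, cV, cD)
    return np.array([b.mean() for b in bands] + [b.std() for b in bands])

# Placeholder chest X-rays and labels: 0 = normal, 1 = COVID-19, 2 = pneumonia.
images = [np.random.rand(128, 128) for _ in range(30)]
labels = np.repeat([0, 1, 2], 10)
X = np.array([haar_features(im) for im in images])

knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)  # "fine" KNN roughly = few neighbours
print(knn.score(X, labels))                               # illustrative fit on training data only
```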

17.
45th Mexican Conference on Biomedical Engineering, CNIB 2022 ; 86:382-392, 2023.
Article in English | Scopus | ID: covidwho-2148585

ABSTRACT

Although the real-time polymerase chain reaction (RT-PCR) test is the gold standard for the diagnosis of COVID-19 patients, the use of Computed Tomography (CT) images for diagnosis and for assessment of the severity of this disease and its evolution is widely accepted due to the possibility of observing lung damage. This evaluation is mainly made qualitatively; therefore, techniques have been proposed to obtain relevant additional clinical information, such as texture features. In this work, CT scans from 46 patients with COVID-19 were used to characterize the lungs by means of textural features. In the proposed approach, the pulmonary parenchyma was delimited using a U-Net previously trained with images from different pulmonary diseases. Texture metrics were calculated using co-occurrence and run-length matrices considering both lungs together, the right and left lungs separately, as well as the apex, middle, and base lung regions. A boxplot descriptive analysis was performed looking for significant differences between regions for each estimated texture metric. Results show that the Gray Level Non-Uniformity (GLNU) and Run-Length Non-Uniformity (RLNU) features show more significant differences between regions, suggesting that these metrics may provide a proper characterization of the pulmonary damage caused by COVID-19. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

18.
2nd Asian Conference on Innovation in Technology, ASIANCON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2136104

ABSTRACT

Access control, or gaining permission to perform a particular activity without claiming an identity, is an identification process that involves one-to-all matching. The proposed multi-instance biometric identification utilizes multiple instances of the finger vein to identify a person with secured templates. Common pre-processing and texture-based feature extraction are applied to two finger-vein instances, followed by feature fusion with dimensionality reduction that retains the discriminative features, further simplifying the system design and making it a real-time application. Secured templates are created from the fused finger-vein features by subjecting them to Gaussian random projection-based Index-of-Max hashing. The hashed templates are not only secured but also form compact integer vectors requiring little storage space and matching time. The use of multiple instances increases the universality of identification, and the finger-vein modality makes the system contactless. This contactless property makes the authentication system highly suitable for infectious situations such as COVID-19. © 2022 IEEE.
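A sketch of Index-of-Max (IoM) hashing via Gaussian random projection as described above: each hash digit is the argmax over one small random projection of the fused feature vector. This follows the general IoM idea only; the digit count, projection dimension, and feature vector are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def iom_hash(feature, n_digits=64, proj_dim=8, seed=0):
    """Compact integer template; the seeded random matrices act as the user-specific key."""
    rng = np.random.default_rng(seed)
    digits = np.empty(n_digits, dtype=np.int64)
    for i in range(n_digits):
        W = rng.standard_normal((proj_dim, feature.size))  # Gaussian random projection
        digits[i] = int(np.argmax(W @ feature))            # index of the maximum response
    return digits

fused = np.random.rand(256)                            # stands in for fused finger-vein features
template = iom_hash(fused)                             # enrolment template
probe = iom_hash(fused + 0.01 * np.random.rand(256))   # slightly perturbed probe, same key
print("matching digits:", int((template == probe).sum()), "/", template.size)
```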

19.
Computer Vision and Image Understanding ; 226, 2023.
Article in English | Scopus | ID: covidwho-2130572

ABSTRACT

The periocular region is one of the promising biometric traits for human recognition. It encompasses the area surrounding the eyes, including the eyebrows, eyelids, eyelashes, eye-folds, eye shape, and skin texture. Its relevance has been further emphasized during the COVID-19 pandemic due to masked faces. This article therefore presents a detailed review of periocular biometrics to understand its current state. The paper first discusses the various face and periocular techniques specially designed to recognize humans wearing a face mask. Then, different aspects of periocular biometrics are reviewed: (a) the anatomical cues present in the periocular region useful for recognition, (b) the various feature extraction and matching techniques developed, (c) recognition across different spectra, (d) fusion with other biometric modalities (face or iris), (e) recognition on mobile devices, (f) its usefulness in other applications, (g) periocular datasets, and (h) competitions organized for evaluating the efficacy of this biometric modality. Finally, various challenges and future directions in the field of periocular biometrics are presented. © 2022

20.
6th IEEE International Conference on Cybernetics and Computational Intelligence, CyberneticsCom 2022 ; : 393-397, 2022.
Article in English | Scopus | ID: covidwho-2051962

ABSTRACT

This paper describes research on texture feature extraction for COVID-19 detection. Fractal Dimension Texture Analysis (FDTA) and the Gray Level Co-occurrence Matrix (GLCM) were used for feature extraction, and a dense neural network was used to classify images into three classes: Normal, COVID-19, and Other pneumonia. The input to the texture feature extraction is a chest X-ray (CXR) image that has been converted to grayscale and resized to 400×400 pixels. Performance analysis of the model uses a confusion matrix. The best-performing feature extraction method for detecting COVID-19 is FDTA, with a testing accuracy of 62.5%. © 2022 IEEE.
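A rough box-counting estimate of fractal dimension, in the spirit of the FDTA features mentioned above; this simplified version works on a binarised 400×400 CXR and is an assumption-laden stand-in, not the authors' implementation.

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Fit log(box count) against log(1 / box size) over dyadic box sizes."""
    sizes, counts = [], []
    size = min(binary_img.shape) // 2
    while size >= 2:
        count = 0
        for i in range(0, binary_img.shape[0], size):
            for j in range(0, binary_img.shape[1], size):
                if binary_img[i:i + size, j:j + size].any():
                    count += 1
        sizes.append(size)
        counts.append(count)
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.random.rand(400, 400) > 0.5        # stands in for a thresholded 400x400 CXR
print(box_counting_dimension(img))          # ~2 for a dense random binary image
```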
